Index

  1. Log in to the cluster
    1. Login nodes (logins)
    2. Password change
    3. Access from/to the outside
  2. Directories and file systems
    1. Basic directories under GPFS
    2. Storage space limits/quotas
  3. Running jobs
    1. Submit to queues
    2. Queue limits
    3. Interactive jobs

Log in to the cluster

IMPORTANT

Accounts are personal and non-transferable. If the project requires access for someone else or an increase in the assigned resources, the project manager is responsible for making that request.

Login nodes

{mn1,mn2,mn3}.bsc.es

All connections must be done through SSH (Secure SHell), for example:

mylaptop$> ssh {username}@mn1.bsc.es
mylaptop$> ssh {username}@mn2.bsc.es
mylaptop$> ssh {username}@mn3.bsc.es
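
If you connect frequently, an entry in your local SSH client configuration can shorten the command. This is only a convenience sketch for a typical OpenSSH client; the alias name 'mn4' is arbitrary:

mylaptop$> cat >> ~/.ssh/config << 'EOF'
Host mn4
    HostName mn1.bsc.es
    User {username}
EOF
mylaptop$> ssh mn4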

Password change

For security reasons, you must change your initial password.

To change your password, you have to log in to the Storage (Data Transfer) machine:

mylaptop$> ssh {username}@dt01.bsc.es

with the same 'username' and 'password' as in the cluster. Then, use the 'passwd' command.
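
For example, once logged in to dt01.bsc.es (the 'dt01$>' prompt is illustrative), run the command and answer the prompts for your current and new password:

dt01$> passwd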

The new password will become effective 10 minutes after the change.

Access from/to the outside

The login nodes are the only nodes accessible from the outside, but no connections are allowed from the cluster to the outside world for security reasons.

All file transfers from/to the outside must be executed from your local machine and not within the cluster:

Example to copy files or directories from MN4 to an external machine:
mylaptop$> scp -r {username}@dt01.bsc.es:"MN4_SOURCE_dir" "mylaptop_DEST_dir"
Example to copy files or directories from an external machine to MN4:
mylaptop$> scp -r "mylaptop_SOURCE_dir" {username}@dt01.bsc.es:"MN4_DEST_dir"

Directories and file systems

There are different partitions of disk space. Each area may have specific size limits and usage policies.

Basic directories under GPFS

GPFS (General Parallel File System, a distributed networked filesystem) can be accessed from all the nodes and from the Data Transfer Machine (dt01.bsc.es).

The available GPFS directories and file systems are:

  • /apps: this filesystem holds the applications and libraries already installed on the machine for everyday use. Users cannot write to it.

  • /gpfs/home: the default working area after login, where users can keep source code, scripts, and other personal data. The space quota is individual and relatively small. Running jobs from here is not recommended; run them from your group's /gpfs/projects or /gpfs/scratch instead.

  • /gpfs/projects: intended for data sharing between users of the same group or project. All members of the group share the space quota.

  • /gpfs/scratch: each user has their own directory under this partition, intended, for example, for temporary job files during execution. All members of the group share the space quota (see the path sketch after this list).
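
A minimal path sketch, assuming the usual per-group layout ({group} and {username} are placeholders; the exact layout may differ for your project):

$> echo $HOME                            # home directory under /gpfs/home
$> cd /gpfs/projects/{group}             # shared group space
$> cd /gpfs/scratch/{group}/{username}   # per-user scratch space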

Storage space limits/quotas

To check the disk space limits and your current usage for each file system:

$> bsc_quota

Running jobs

Submit to queues

Jobs have to be submitted to the queue system through Slurm, for example (a sample job script sketch follows these commands):

To submit a job:
$> sbatch {job_script}
To show all the submitted jobs:
$> squeue
To cancel a job:
$> scancel {job_id}
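
A minimal job script sketch; the job name, output files, resource values, and the executable './my_app' are placeholders, and your project may require additional directives such as a QoS or account:

#!/bin/bash
#SBATCH --job-name=test
#SBATCH --output=test_%j.out
#SBATCH --error=test_%j.err
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --time=00:10:00

srun ./my_app

Submit it with 'sbatch' as shown above.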

Queue limits

To check the limits for the queues (QoS) assigned to the project, you can do:

$> bsc_queues

Interactive jobs

Interactive nodes

MareNostrum4 provides five interactive nodes (login1 to login5); these include the three login nodes.

From any login node, you can access the rest through 'ssh':

login1$> ssh login5

These nodes are intended for editing, compiling, and preparing and submitting batch jobs.

CAUTION

If a run needs more CPU time than the allowed limit (10 minutes), it must be executed through the batch system (Slurm).

Interactive sessions

Allocation of an interactive session has to be done through Slurm, for example:

To request an interactive session in the 'interactive' partition:
$> salloc --partition=interactive
Also:
$> salloc -p interactive
To request an interactive session on a compute node ('main' partition):
$> salloc -n 1 -c 4  # example to request 1 task, 4 CPUs (cores) per task
To request an interactive session on a non-shared (exclusive) compute node:
$> salloc --exclusive
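
Once the allocation is granted you get a shell inside it (the exact behaviour may depend on the site configuration); commands can then be launched with 'srun', for example ('./my_app' is a placeholder executable):

$> salloc -n 1 -c 4
$> srun ./my_app   # runs inside the allocation
$> exit            # release the allocation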